On Dimension-independent Rates of Convergence for Function Approximation with Gaussian Kernels

Authors

  • Gregory E. Fasshauer
  • Fred J. Hickernell
  • Henryk Wozniakowski
Abstract

This article studies the problem of approximating functions belonging to a Hilbert space H_d with an isotropic or anisotropic translation-invariant (or stationary) reproducing kernel, with special attention given to the Gaussian kernel K_d(x, t) = exp(−∑_{ℓ=1}^{d} γ_ℓ (x_ℓ − t_ℓ)²) for all x, t ∈ ℝ^d. The isotropic (or radial) case corresponds to using the same shape parameter for all coordinates, namely γ_ℓ = γ > 0 for all ℓ, whereas the anisotropic case corresponds to varying shape parameters γ_ℓ. It is known that the optimal approximation algorithm, called a meshfree or kriging method, yields approximation errors that, for fixed d, can decay faster than any polynomial in n^{−1}, where n is the number of data points. We are especially interested in moderate to large d, which in particular arises in the construction of surrogates for computer experiments. This article presents dimension-independent error bounds, i.e., the error is bounded by C n^{−p}, where C and p are independent of d and n, and the bound holds for all d and n. This is equivalent to strong polynomial tractability, a subject that has by now been thoroughly studied for multivariate problems. The pertinent error criterion is the worst case of such an algorithm over the unit ball in H_d, with the error for a single function given by the L2 norm whose weight is also a Gaussian, used to "localize" ℝ^d. We consider two classes of algorithms: (1) those using data generated by finitely many arbitrary linear functionals, and (2) those using only finitely many function values. Provided that arbitrary linear functional data are available, we show that p = 1/2 is possible for any translation-invariant positive definite kernel (Theorem 5.1 and the remarks following it). We also consider a sequence of shape parameters γ_ℓ decaying to zero like ℓ^{−ω} as ℓ (and therefore also d) tends to ∞. Note that for large ω this means that the function to be approximated is "essentially low-dimensional".
Then the largest p is roughly max(1/2, ω) (Theorem 5.2). If only function values are available, the dimension-independent convergence rates are somewhat worse (Theorems 5.3 and 5.4). If the goal is to make the error smaller than C n^{−p} times the initial (n = 0) error, then the corresponding dimension-independent exponent p is roughly ω (Theorem 6.2 and Corollary 6.4). In particular, in the isotropic case, when ω = 0, the error does not even decay polynomially in n^{−1} (Theorem 6.1). In summary, excellent dimension-independent error decay rates are only possible when the sequence of shape parameters decays rapidly.

AMS subject classifications: 65D15, 68Q17, 41A25, 41A63


Similar articles

SVM Learning and Lp Approximation by Gaussians on Riemannian Manifolds

We confirm by the multi-Gaussian support vector machine (SVM) classification that the intrinsic dimension of Riemannian manifolds improves the efficiency (learning rates) of learning algorithms. The essential analysis lies in the study of approximation in Lp (1 ≤ p < ∞) of Lp functions by their convolutions with the Gaussian kernel with variance σ → 0. This covers the SVM case when the approxim...


Average Case Approximation: Convergence and Tractability of Gaussian Kernels

We study the problem of approximating functions of d variables in the average case setting for a separable Banach space Fd equipped with a zero-mean Gaussian measure. The covariance kernel of this Gaussian measure takes the form of a Gaussian that depends on shape parameters γ_l. We stress that d can be arbitrarily large. Our approximation error is defined in the L2 norm, and we study the minima...


A Flexible Link Radar Control Based on Type-2 Fuzzy Systems

An adaptive neuro fuzzy inference system based on interval Gaussian type-2 fuzzy sets in the antecedent part and Gaussian type-1 fuzzy sets as coefficients of linear combination of input variables in the consequent part is presented in this paper. The capability of the proposed method (we named ANFIS2) for function approximation and dynamical system identification is remarkable. The structure o...


Rate of Convergence and Tractability of the Radial Function Approximation Problem

This article studies the problem of approximating functions belonging to a Hilbert space Hd with an isotropic or anisotropic Gaussian reproducing kernel, K_d(x, t) = exp(−∑_{l=1}^{d} γ_l (x_l − t_l)²) for all x, t ∈ ℝ^d. The isotropic case corresponds to using the same shape parameter for all coordinates, namely γ_l = γ > 0 for all l, whereas the anisotropic case corresponds to varying shape paramet...


COLLOCATION METHOD FOR FREDHOLM-VOLTERRA INTEGRAL EQUATIONS WITH WEAKLY SINGULAR KERNELS

In this paper it is shown that the use of uniform meshes leads to optimal convergence rates provided that the analytical solutions of a particular class of Fredholm-Volterra integral equations (FVIEs) are smooth.



Journal:
  • SIAM J. Numerical Analysis

Volume 50, Issue –

Pages –

Publication year: 2012